
    When Does Disengagement Correlate with Performance in Spoken Dialog Computer Tutoring?

    In this paper we investigate how student disengagement relates to two performance metrics in a spoken dialog computer tutoring corpus, both when disengagement is measured through manual annotation by a trained human judge, and also when disengagement is measured through automatic annotation by the system based on a machine learning model. First, we investigate whether manually labeled overall disengagement and six different disengagement types are predictive of learning and user satisfaction in the corpus. Our results show that although students' percentage of overall disengaged turns negatively correlates both with the amount they learn and their user satisfaction, the individual types of disengagement correlate differently: some negatively correlate with learning and user satisfaction, while others do not correlate with either metric at all. Moreover, these relationships change somewhat depending on student prerequisite knowledge level. Furthermore, using multiple disengagement types to predict learning improves predictive power. Overall, these manual label-based results suggest that although adapting to disengagement should improve both student learning and user satisfaction in computer tutoring, maximizing performance requires the system to detect and respond differently based on disengagement type. Next, we present an approach to automatically detecting and responding to user disengagement types based on their differing correlations with correctness. Investigation of our machine learning model of user disengagement shows that its automatic labels negatively correlate with both performance metrics in the same way as the manual labels. The similarity of the correlations across the manual and automatic labels suggests that the automatic labels are a reasonable substitute for the manual labels.
Moreover, the significant negative correlations themselves suggest that redesigning ITSPOKE to automatically detect and respond to disengagement has the potential to remediate disengagement and thereby improve performance, even in the presence of noise introduced by the automatic detection process.
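The disengagement–performance relationship described above is a simple per-student correlation between the fraction of disengaged turns and a performance metric. A minimal sketch, using illustrative values rather than the actual corpus data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student values: fraction of disengaged turns
# and normalized learning gain (not taken from the corpus).
pct_disengaged = [0.05, 0.10, 0.22, 0.30, 0.41, 0.55]
learning_gain = [0.82, 0.75, 0.60, 0.52, 0.40, 0.31]

r = pearson_r(pct_disengaged, learning_gain)
print(f"r = {r:.3f}")  # negative: more disengagement, less learning
```

In the paper's setting the same computation would be run separately per disengagement type, which is what reveals that only some types carry a significant negative correlation.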

    Exploring affect-context dependencies for adaptive system development

    We use χ2 to investigate the context dependency of student affect in our computer tutoring dialogues, targeting uncertainty in student answers in 3 automatically monitorable contexts. Our results show significant dependencies between uncertain answers and specific contexts. Identification and analysis of these dependencies is our first step in developing an adaptive version of our dialogue system.
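The χ2 dependency test described above operates on a contingency table of observed counts. A minimal sketch of the statistic, with made-up counts rather than the tutoring-corpus data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an observed contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: uncertain vs. certain student answers in two contexts.
observed = [[40, 10],  # context A: [uncertain, certain]
            [15, 35]]  # context B: [uncertain, certain]

stat = chi_square(observed)
print(f"chi-square = {stat:.2f}")  # compare to 3.84, the df=1 critical value at p = 0.05
```

A statistic above the critical value for the table's degrees of freedom indicates a significant dependency between uncertainty and context, which is the kind of result the abstract reports.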

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Using bigrams to identify relationships between student certainness states and tutor responses in a spoken dialogue corpus

    We use n-gram techniques to identify dependencies between student affective states of certainty and subsequent tutor dialogue acts, in an annotated corpus of human-human spoken tutoring dialogues. We first represent our dialogues as bigrams of annotated student and tutor turns. We next use χ2 analysis to identify dependent bigrams. Our results show dependencies between many student states and subsequent tutor dialogue acts. We then analyze the dependent bigrams and suggest ways that our current computer tutor can be enhanced to adapt its dialogue act generation based on these dependencies.
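The bigram representation described above amounts to counting adjacent (student state, tutor act) pairs, whose counts then feed the χ2 dependency test. A minimal sketch; the labels here are illustrative, not the corpus's actual annotation scheme:

```python
from collections import Counter

# Hypothetical annotated dialogue: each pair is a student certainty state
# followed by the tutor dialogue act that responds to it.
bigrams = [("uncertain", "hint"), ("certain", "positive_feedback"),
           ("uncertain", "hint"), ("uncertain", "restatement"),
           ("certain", "positive_feedback"), ("certain", "new_question")]

counts = Counter(bigrams)
# These counts populate a contingency table for the dependency test,
# e.g. how often "hint" follows "uncertain" vs. other student states.
for (state, act), n in sorted(counts.items()):
    print(f"{state:>9} -> {act}: {n}")
```

Pairs whose observed counts deviate strongly from what independence predicts are the "dependent bigrams" the abstract refers to.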

    Adapting to Student Uncertainty Improves Tutoring Dialogues

    Abstract. This study shows that affect-adaptive computer tutoring can significantly improve learning efficiency and user satisfaction. We compare two different student uncertainty adaptations which were designed, implemented and evaluated in a controlled experiment using four versions of a wizarded spoken dialogue tutoring system: two adaptive systems used in two experimental conditions (basic and empirical), and two non-adaptive systems used in two control conditions (normal and random). In prior work we compared learning gains across the four systems; here we compare two other important performance metrics: learning efficiency and user satisfaction. We show that the basic adaptive system outperforms the normal (non-adaptive) and empirical (adaptive) systems in terms of learning efficiency. We also show that the empirical (adaptive) and random (non-adaptive) systems outperform the basic adaptive system in terms of user perception of tutor response quality. However, only the basic adaptive system shows a positive correlation between learning and user perception of decreased uncertainty.

    Investigating human tutor responses to student uncertainty for adaptive system development

    Abstract. We use a χ2 analysis on our spoken dialogue tutoring corpus to investigate dependencies between uncertain student answers and 9 dialogue acts the human tutor uses in his response to these answers. Our results show significant dependencies between the tutor’s use of some dialogue acts and the uncertainty expressed in the prior student answer, even after factoring out the answer’s (in)correctness. Identification and analysis of these dependencies is part of our empirical approach to developing an adaptive version of our spoken dialogue tutoring system that responds to student affective states as well as to student correctness.

    Adapting to Multiple Affective States in Spoken Dialogue

    We evaluate a wizard-of-oz spoken dialogue system that adapts to multiple user affective states in real-time: user disengagement and uncertainty. We compare this version with the prior version of our system, which only adapts to user uncertainty. Our analysis investigates how iteratively adding new affect adaptation to an existing affect-adaptive system impacts global and local performance. We find a significant increase in motivation for users who most frequently received the disengagement adaptation. Moreover, responding to disengagement breaks its negative correlations with task success and user satisfaction, reduces uncertainty levels, and reduces the likelihood of continued disengagement.

    Using Performance Trajectories to Analyze the Immediate Impact of User State Misclassification in an Adaptive Spoken Dialogue System

    We present a method of evaluating the immediate performance impact of user state misclassifications in spoken dialogue systems. We illustrate the method with a tutoring system that adapts to student uncertainty over and above correctness. First we define a ranking of user states representing local performance. Second, we compare user state trajectories when the first state is accurately classified versus misclassified. Trajectories are quantified using a previously proposed metric representing the likelihood of transitioning from one user state to another. Comparison of the two sets of trajectories shows whether user state misclassifications change the likelihood of subsequent higher or lower ranked states, relative to accurate classification. Our tutoring system results illustrate the case where user state misclassification increases the likelihood of negative performance trajectories as compared to accurate classification.
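The trajectory comparison described above rests on estimated likelihoods of transitioning from one user state to another, computed separately for accurately classified and misclassified first states. A minimal sketch, with hypothetical state labels and sequences (the paper's actual metric and corpus are not reproduced here):

```python
from collections import Counter

def transition_likelihoods(sequences):
    """Estimate P(next state | current state) from observed state sequences."""
    pair_counts, from_counts = Counter(), Counter()
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            pair_counts[(cur, nxt)] += 1
            from_counts[cur] += 1
    return {pair: n / from_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical trajectories after an accurately classified vs. a
# misclassified first state (states and sequences are illustrative).
accurate = [["uncertain", "certain", "certain"],
            ["uncertain", "certain", "incorrect"]]
misclassified = [["uncertain", "incorrect", "incorrect"],
                 ["uncertain", "incorrect", "certain"]]

p_acc = transition_likelihoods(accurate)
p_mis = transition_likelihoods(misclassified)
print(p_acc.get(("uncertain", "certain"), 0.0))    # toward a higher-ranked state
print(p_mis.get(("uncertain", "incorrect"), 0.0))  # toward a lower-ranked state
```

Comparing the two likelihood tables against the state ranking shows whether misclassification shifts probability mass toward lower-ranked (negative-performance) next states.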

    A user modeling-based performance analysis of a wizarded uncertainty-adaptive dialogue system corpus

    Motivated by prior spoken dialogue system research in user modeling, we analyze interactions between performance and user class in a dataset previously collected with two wizarded spoken dialogue tutoring systems that adapt to user uncertainty. We focus on user classes defined by expertise level and gender, and on both objective (learning) and subjective (user satisfaction) performance metrics. We find that lower expertise users learn best from one adaptive system but prefer the other, while higher expertise users learn more from one adaptive system but do not prefer either. Female users both learn best from and prefer the same adaptive system, while male users prefer one adaptive system but do not learn more from either. Our results yield an empirical basis for future investigations into whether adaptive system performance can improve by adapting to user uncertainty differently based on user class. Copyright © 2009 ISCA